Stratified filtered sampling in stochastic optimization

Authors

  • Robert Rush
  • John M. Mulvey
  • John E. Mitchell
  • Thomas R. Willemain
Abstract

We develop a methodology for evaluating a decision strategy generated by a stochastic optimization model. The methodology is based on a pilot study in which we estimate the distribution of performance associated with the strategy, and define an appropriate stratified sampling plan. An algorithm we call filtered search allows us to implement this plan efficiently. We demonstrate the approach's advantages with a problem in asset/liability management for an insurance company.
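
The core idea of stratified evaluation can be sketched as follows. This is a minimal illustration, not the authors' filtered-search algorithm: the strata, their weights, and the per-stratum performance distributions below are all invented for the example, and a simple normal draw stands in for evaluating the strategy on a scenario.

```python
import numpy as np

# Hypothetical pilot-study setup: three regimes act as strata, with assumed
# weights and per-stratum performance distributions (all numbers invented).
strata = {
    "low":  {"weight": 0.5, "mu": 1.0, "sigma": 0.1},
    "mid":  {"weight": 0.3, "mu": 0.8, "sigma": 0.3},
    "high": {"weight": 0.2, "mu": 0.2, "sigma": 0.8},
}

rng = np.random.default_rng(0)

def simulate_performance(s, n):
    """Stand-in for evaluating the decision strategy on n scenarios from stratum s."""
    return rng.normal(s["mu"], s["sigma"], size=n)

def stratified_estimate(n_total):
    """Proportional-allocation stratified estimate of expected performance."""
    est = 0.0
    for s in strata.values():
        n_s = max(1, round(n_total * s["weight"]))  # samples allocated to this stratum
        est += s["weight"] * simulate_performance(s, n_s).mean()
    return est

print(stratified_estimate(2000))
```

Because each stratum is sampled separately and its mean is weighted by the stratum probability, the estimator's variance depends only on within-stratum variability, which is what makes a well-chosen stratification cheaper than plain Monte Carlo for a given accuracy.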


Related articles

Accelerating Minibatch Stochastic Gradient Descent using Stratified Sampling

Stochastic Gradient Descent (SGD) is a popular optimization method which has been applied to many important machine learning tasks such as Support Vector Machines and Deep Neural Networks. In order to parallelize SGD, minibatch training is often employed. The standard approach is to uniformly sample a minibatch at each step, which often leads to high variance. In this paper we propose a stratif...
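
The stratified-minibatch idea in this abstract can be sketched as below. The dataset, the choice of label as the stratification key, and the proportional quotas are assumptions for illustration, not the paper's exact scheme.

```python
import random

random.seed(0)

# Toy labelled dataset with an imbalanced minority stratum; using the label as
# the stratification key is an assumption made for this sketch.
data = [(x, 0) for x in range(50)] + [(x, 1) for x in range(50, 60)]

def stratified_minibatch(data, batch_size):
    """Sample a fixed quota from each stratum instead of uniformly from the pool."""
    by_stratum = {}
    for item in data:
        by_stratum.setdefault(item[1], []).append(item)
    total = len(data)
    batch = []
    for items in by_stratum.values():
        k = max(1, round(batch_size * len(items) / total))  # proportional quota
        batch.extend(random.sample(items, k))
    random.shuffle(batch)
    return batch

batch = stratified_minibatch(data, 12)
```

Fixing the per-stratum quotas removes the run-to-run fluctuation in stratum composition that uniform sampling introduces, which is the source of the extra gradient variance the abstract refers to.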


Quasi-monte Carlo Strategies for Stochastic Optimization

In this paper we discuss the issue of solving stochastic optimization problems using sampling methods. Numerical results have shown that using variance reduction techniques from statistics can result in significant improvements over Monte Carlo sampling in terms of the number of samples needed for convergence of the optimal objective value and optimal solution to a stochastic optimization probl...
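
The variance-reduction effect of quasi-Monte Carlo points can be seen on a toy integrand. The sketch below uses the base-2 van der Corput sequence (a standard one-dimensional low-discrepancy sequence, not necessarily the construction used in the paper) to estimate E[U²] = 1/3:

```python
import random

def van_der_corput(n, base=2):
    """n-th point of the base-b van der Corput low-discrepancy sequence."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)  # peel off base-b digits of n...
        q += r * bk             # ...and mirror them across the radix point
        bk /= base
    return q

# Compare quasi-Monte Carlo and plain Monte Carlo on E[U^2] = 1/3.
random.seed(0)
N = 1024
qmc_est = sum(van_der_corput(i) ** 2 for i in range(1, N + 1)) / N
mc_est = sum(random.random() ** 2 for _ in range(N)) / N
print(qmc_est, mc_est)
```

The quasi-Monte Carlo error shrinks roughly like (log N)/N versus the 1/√N of plain Monte Carlo, which is the sample-count saving the abstract alludes to.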


Statistical correlation in stratified sampling

A new efficient technique is proposed for imposing statistical correlation when using Monte Carlo-type methods for the statistical analysis of computational problems. The technique is based on the stochastic optimization method called Simulated Annealing. Comparison with other techniques presently in use, together with intensive numerical testing, showed the superiority and robustness of the method. ...
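
The mechanism can be sketched as annealing over permutations: keep one sample column fixed and repeatedly swap entries of the other until their empirical correlation matches a target. The data, target, and cooling schedule below are invented for illustration and are not the paper's setup.

```python
import math
import random

def corr(x, y):
    """Empirical Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def anneal_correlation(x, y, target, steps=20000, t0=0.1):
    """Permute y via simulated annealing so corr(x, y) approaches target."""
    y = y[:]
    cost = abs(corr(x, y) - target)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9          # linear cooling schedule
        i, j = random.randrange(len(y)), random.randrange(len(y))
        y[i], y[j] = y[j], y[i]                      # propose swapping two entries
        new_cost = abs(corr(x, y) - target)
        if new_cost < cost or random.random() < math.exp((cost - new_cost) / t):
            cost = new_cost                          # accept the swap
        else:
            y[i], y[j] = y[j], y[i]                  # revert it
    return y, cost

random.seed(1)
x = [i / 19 for i in range(20)]
y = x[:]
random.shuffle(y)
y2, final_cost = anneal_correlation(x, y, target=0.8)
```

Because only the ordering of y changes, its marginal distribution is preserved exactly, which is the point of imposing correlation by permutation rather than by transforming the sample values.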


A Novel Stochastic Stratified Average Gradient Method: Convergence Rate and Its Complexity

SGD (Stochastic Gradient Descent) is a popular algorithm for large-scale optimization problems due to its low per-iteration cost. However, SGD cannot achieve a linear convergence rate, unlike FGD (Full Gradient Descent), because of the inherent gradient variance. To address this, mini-batch SGD was proposed as a trade-off between convergence rate and iteration cost. In this paper, a general ...
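
The variance/cost trade-off behind mini-batching can be made concrete on a toy objective. The least-squares model and the batch sizes below are assumptions for illustration only:

```python
import random
import statistics

random.seed(0)

# Toy setting (assumed): for the objective 0.5*(w - x)^2 the gradient at w is
# (w - x), so a minibatch gradient is an average of noisy terms and its
# variance shrinks roughly like 1/batch_size.
data = [random.gauss(0.0, 1.0) for _ in range(1000)]
w = 0.5

def minibatch_grad(batch_size):
    batch = random.sample(data, batch_size)
    return sum(w - x for x in batch) / batch_size

def grad_variance(batch_size, trials=2000):
    """Monte Carlo estimate of the variance of the minibatch gradient."""
    return statistics.pvariance([minibatch_grad(batch_size) for _ in range(trials)])

v1, v32 = grad_variance(1), grad_variance(32)
print(v1, v32)
```

Larger batches buy lower gradient variance at a proportionally higher per-iteration cost, which is exactly the trade-off that methods like the one in this abstract aim to improve on.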


Stochastic Learning on Imbalanced Data: Determinantal Point Processes for Mini-batch Diversification

We study a mini-batch diversification scheme for stochastic gradient descent (SGD). While classical SGD relies on uniformly sampling data points to form a mini-batch, we propose a non-uniform sampling scheme based on the Determinantal Point Process (DPP). The DPP relies on a similarity measure between data points and gives low probabilities to mini-batches which contain redundant data, and high...
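
The goal of DPP-based diversification, down-weighting mini-batches that contain redundant points, can be illustrated with a much simpler stand-in. Exact (k-)DPP sampling requires an eigendecomposition of the similarity kernel; the greedy farthest-point selection below is not a DPP sampler, only a sketch of the same diversity objective on invented 2-D data:

```python
import math
import random

random.seed(0)

# Invented 2-D feature vectors; Euclidean distance plays the role of
# (inverse) similarity between data points.
points = [(random.random(), random.random()) for _ in range(100)]

def diverse_batch(points, k):
    """Greedily build a batch of k mutually dissimilar points."""
    batch = [points[0]]  # arbitrary seed point
    while len(batch) < k:
        # add the point farthest from its nearest already-chosen neighbour
        best = max((p for p in points if p not in batch),
                   key=lambda p: min(math.dist(p, q) for q in batch))
        batch.append(best)
    return batch

batch = diverse_batch(points, 8)
print(batch)
```

Like a DPP, this prefers batches whose members are spread out in feature space; unlike a DPP, it is deterministic given the seed point, so it only conveys the intuition, not the probabilistic machinery.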




Journal:
  • JAMDS

Volume 4, Issue –

Pages –

Publication date: 2000